Transformers have achieved great success in several domains, ranging from natural language processing to computer vision. However, it has recently been shown that stacking self-attention layers, the distinctive architectural component of Transformers, can cause rank collapse of the token representations at initialization. Whether and how rank collapse affects training is still an open question, and its investigation is necessary for a more comprehensive understanding of this architecture. In this work, we shed new light on the causes and effects of this phenomenon. First, we show that rank collapse of the token representations causes the gradients of the queries and keys to vanish at initialization, thereby hampering training. Furthermore, we provide a thorough description of the origin of rank collapse and discuss how it can be prevented via an appropriate depth-dependent scaling of the residual branches. Finally, our analysis reveals that particular architectural hyperparameters affect the gradients of the queries and values differently, leading to disproportionate gradient norms. This hints at an explanation for the widespread use of adaptive methods for the optimization of Transformers.
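To make the rank-collapse claim concrete, here is a minimal PyTorch sketch (ours, not the paper's code) that stacks plain self-attention layers without residual branches and checks both the collapse of the token representations and the query-projection gradient norm at initialization; all dimensions and the collapse proxy are illustrative choices:

```python
# Minimal sketch (ours, not the paper's code): stack plain self-attention
# layers without residual branches and observe rank collapse of the token
# representations plus the query-projection gradient norm at initialization.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n, depth = 64, 32, 24  # embedding dim, sequence length, number of layers

layers = nn.ModuleList(
    [nn.MultiheadAttention(d, num_heads=4, batch_first=True) for _ in range(depth)]
)

h = torch.randn(1, n, d)
for attn in layers:
    h, _ = attn(h, h, h)  # pure attention stack: no residuals, no MLP

# Rank-collapse proxy: fraction of spectral mass on the top singular value
# (a value near 1 means the token representations are close to rank one).
s = torch.linalg.svdvals(h[0])
print(f"top singular value fraction: {(s[0] / s.sum()).item():.3f}")

h.sum().backward()
q_grad = layers[0].in_proj_weight.grad[:d]  # query block of the packed Q/K/V weight
print(f"layer-0 query-projection grad norm: {q_grad.norm().item():.2e}")
```

As `depth` grows, the top singular-value fraction approaches 1 and the query/key gradient norms shrink, which is the vanishing-gradient effect described above; depth-dependent scaling of the residual branches is the remedy the abstract proposes.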
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
In recent years, the use of deep learning (DL) algorithms has improved the performance of vision-based space applications. However, generating the large amounts of annotated data required to train these DL algorithms has proven challenging. While synthetically generated images can be used, DL models trained on synthetic data often suffer performance degradation when tested in real-world environments. In this context, the Interdisciplinary Centre for Security, Reliability and Trust (SnT) at the University of Luxembourg developed the 'SnT Zero-G Lab' for training and validating vision-based space algorithms in conditions emulating real-world space environments. An important aspect of the development of the SnT Zero-G Lab was equipment selection. Drawing on lessons learned during the lab's development, this paper presents a systematic approach that combines market surveys and experimental analyses for equipment selection. In particular, the paper focuses on the image-acquisition equipment of a space lab: background materials, cameras, and lighting. The results of the experimental analyses show that effective equipment selection in space-lab development projects requires market surveys complemented by experimental analyses.
Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts to train and deploy Transformers more efficiently have identified many strategies for approximating the self-attention matrix, a key module in the Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions, and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as wavelets, whose potential value in this setting has so far remained underexplored. We show that simple approximations, based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield an MRA-based approach to self-attention with excellent performance across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multiresolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at \url{https://github.com/mlpen/mra-attention}.
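The core multiresolution intuition can be pictured with a hedged two-scale toy (the paper's actual MRA construction uses a proper wavelet decomposition with hardware-aware refinement, which this sketch does not reproduce): coarse attention over block-pooled keys/values captures long-range structure, while exact attention inside local blocks refines the fine scale.

```python
# Hedged two-scale sketch of the multiresolution idea (a simplification, not
# the paper's MRA algorithm). Assumes the sequence length n is divisible by
# `block`; the 0.5/0.5 mixing of scales is a naive illustrative choice.
import torch
import torch.nn.functional as F

def two_scale_attention(q, k, v, block=16):
    n, d = q.shape
    # Coarse scale: Haar-style averaging of keys/values into n/block summaries.
    k_c = k.reshape(n // block, block, d).mean(dim=1)
    v_c = v.reshape(n // block, block, d).mean(dim=1)
    coarse = F.softmax(q @ k_c.T / d**0.5, dim=-1) @ v_c
    # Fine scale: exact attention within each local block (near-diagonal part).
    qb = q.reshape(n // block, block, d)
    kb = k.reshape(n // block, block, d)
    vb = v.reshape(n // block, block, d)
    local = F.softmax(qb @ kb.transpose(1, 2) / d**0.5, dim=-1) @ vb
    return 0.5 * (coarse + local.reshape(n, d))

q = k = v = torch.randn(128, 64)
out = two_scale_attention(q, k, v)  # O(n*(n/block) + n*block) instead of O(n^2)
```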
Recent legislation has led to interest in machine unlearning, i.e., removing specific training samples from a predictive model as if they had never been in the training dataset. Unlearning may also be required due to corrupted/adversarial data or simply a user's updated privacy requirements. For models that require no training (k-NN), simply deleting the closest original sample can be effective. This idea, however, is inapplicable to models that learn richer representations. Recent ideas leveraging optimization-based updates scale poorly with the model dimension d, due to inverting the Hessian of the loss function. We use a variant of a new conditional independence coefficient, L-CODEC, to identify a subset of the model parameters with the most semantic overlap at the individual-sample level. Our approach completely avoids the need to invert a (possibly) huge matrix. By utilizing a Markov blanket selection, we premise that L-CODEC is also amenable to deep unlearning, as well as other applications in vision. Compared to alternatives, L-CODEC makes approximate unlearning possible in settings that would otherwise be infeasible, including vision models used for face recognition, person re-identification, and NLP models that may require unlearning of samples identified for exclusion. Code can be found at https://github.com/vsingh-group/lcodec-deep-unlearning/
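A hedged sketch of the subset-update idea follows; plain gradient magnitude stands in for the L-CODEC conditional-independence selection (which is not reproduced here), and the point is only that the Newton-style step touches a small k x k Hessian block rather than the full d x d Hessian:

```python
# Sketch (ours, not the paper's method): select the k parameters most tied to
# the forget sample, then take a damped Newton ascent step on that block only.
# Gradient magnitude is a crude surrogate for the L-CODEC selection rule.
import torch

def unlearn_sample(model, loss_fn, x, y, k=64, damping=1e-3):
    params = list(model.parameters())
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    g = torch.cat([gr.reshape(-1) for gr in grads])
    idx = g.abs().topk(k).indices  # surrogate for L-CODEC parameter selection
    # Hessian block restricted to the subset: k x k instead of d x d.
    rows = [torch.cat([r.reshape(-1) for r in
                       torch.autograd.grad(g[i], params, retain_graph=True)])[idx]
            for i in idx]
    H = torch.stack(rows)
    step = torch.linalg.solve(H + damping * torch.eye(k), g[idx])
    with torch.no_grad():  # ascent on the forget-sample loss, subset only
        flat = torch.cat([p.reshape(-1) for p in params])
        flat[idx] += step
        offset = 0
        for p in params:
            p.copy_(flat[offset:offset + p.numel()].reshape(p.shape))
            offset += p.numel()
```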
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
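The motion-forecasting records described above can be pictured with a small, purely hypothetical container; these field names are ours for illustration and do not mirror the released av2 package's actual API:

```python
# Purely illustrative data structures for the scenario format described above.
# Field names are ours and do NOT correspond to the av2 package's API.
from dataclasses import dataclass, field

@dataclass
class TrackState:
    x: float          # map-aligned position (m)
    y: float
    heading: float    # radians
    velocity: float   # m/s

@dataclass
class ActorTrack:
    actor_id: str
    category: str                        # one of the annotated object categories
    is_scored: bool                      # future motion must be predicted for scored actors
    history: list[TrackState] = field(default_factory=list)

@dataclass
class Scenario:
    scenario_id: str
    city: str                            # one of six cities, each with its own HD map
    tracks: list[ActorTrack] = field(default_factory=list)
```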
Object movement identification is one of the most researched problems in the field of computer vision. In this task, we try to classify each pixel as foreground or background. Even though numerous traditional machine learning and deep learning methods already exist for this problem, the two major issues with most of them are the need for large amounts of ground-truth data and their inferior performance on unseen videos. Since every pixel of every frame has to be labeled, acquiring large amounts of data for these techniques gets rather expensive. Recently, Zhao et al. [1] proposed a one-of-a-kind Arithmetic Distribution Neural Network (ADNN) for universal background subtraction, which utilizes probability information from the histogram of temporal pixels and achieves promising results. Building on this work, we developed an intelligent video surveillance system that uses the ADNN architecture for motion detection, trims the video down to the parts containing motion, and performs anomaly detection on the trimmed video.
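A hedged sketch of the pipeline shape follows, with OpenCV's stock MOG2 background subtractor standing in for the ADNN motion detector (which is not reimplemented here); the structure mirrors the system described above: detect motion, keep only moving segments, hand them to a downstream anomaly detector.

```python
# Pipeline sketch (ours): MOG2 stands in for the ADNN motion detector.
import cv2

def trim_to_motion(video_path, min_fg_fraction=0.01):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    motion_frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # per-pixel foreground/background mask
        # Keep the frame only if enough pixels are classified as foreground.
        if (mask > 0).mean() > min_fg_fraction:
            motion_frames.append(frame)
    cap.release()
    return motion_frames  # trimmed video: motion-containing frames only

# trimmed = trim_to_motion("surveillance.mp4")
# scores = anomaly_detector(trimmed)  # hypothetical downstream model
```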
The machine translation mechanism translates texts automatically between different natural languages, and Neural Machine Translation (NMT) has gained attention for its rational context analysis and fluent translation accuracy. However, processing low-resource languages that lack relevant training attributes like supervised data is a current challenge for Natural Language Processing (NLP). We incorporated a technique known as Active Learning with the NMT toolkit Joey NMT to reach sufficient accuracy and robust predictions for low-resource language translation. With active learning, a semi-supervised machine learning strategy, the training algorithm determines which unlabeled data would be most beneficial for obtaining labels, using selected query techniques. We implemented two model-driven acquisition functions for selecting the samples to be validated. This work uses transformer-based NMT systems: a baseline model (BM), a fully trained model (FTM), an active learning least-confidence-based model (ALLCM), and an active learning margin-sampling-based model (ALMSM), when translating English to Hindi. The Bilingual Evaluation Understudy (BLEU) metric has been used to evaluate system results. The BLEU scores of the BM, FTM, ALLCM, and ALMSM systems are 16.26, 22.56, 24.54, and 24.20, respectively. The findings in this paper demonstrate that active learning techniques help the model converge early and improve the overall quality of the translation system.
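The two acquisition functions named above are standard; here is a minimal sketch (ours, not the paper's code) of least-confidence and margin-sampling scores computed over per-token output distributions:

```python
# Minimal sketch of the two acquisition functions used above; `probs` is an
# array of predicted token distributions for one candidate sentence.
import numpy as np

def least_confidence(probs):
    # probs: (sentence_len, vocab). A low top-token probability means the
    # model is unsure, so the sentence is worth sending for labeling.
    return 1.0 - np.max(probs, axis=-1).mean()

def margin_sampling(probs):
    # A small gap between the two most likely tokens signals ambiguity.
    top2 = np.sort(probs, axis=-1)[:, -2:]
    return -(top2[:, 1] - top2[:, 0]).mean()  # higher score = smaller margin

def select_batch(pool, score_fn, k=32):
    # pool: list of (sentence_id, probs); returns the k most informative ids.
    scored = sorted(pool, key=lambda item: score_fn(item[1]), reverse=True)
    return [sid for sid, _ in scored[:k]]
```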
We study the problem of planning under model uncertainty in an online meta-reinforcement learning (RL) setting where an agent is presented with a sequence of related tasks with limited interactions per task. The agent can use its experience in each task and across tasks to estimate both the transition model and the distribution over tasks. We propose an algorithm to meta-learn the underlying structure across tasks, utilize it to plan in each task, and upper-bound the regret of the planning loss. Our bound suggests that the average regret over tasks decreases as the number of tasks increases and as the tasks become more similar. In the classical single-task setting, it is known that the planning horizon should depend on the estimated model's accuracy, that is, on the number of samples within the task. We generalize this finding to meta-RL and study the dependence of planning horizons on the number of tasks. Based on our theoretical findings, we derive heuristics for selecting slowly increasing discount factors, and we validate their significance empirically.
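As a purely illustrative example of a "slowly increasing discount factor" (the paper derives its own schedule, which this does not reproduce), one could let the effective planning horizon grow with the number of tasks seen:

```python
# Illustrative schedule (our own assumed form, not the paper's derived rule):
# the effective horizon 1/(1 - gamma) grows slowly as more tasks - and hence
# more samples for the meta-learned model - become available.
import math

def discount_for_task(t, gamma_max=0.99, c=0.5):
    # t = number of tasks seen so far; horizon grows ~ sqrt(t), capped.
    gamma = 1.0 - 1.0 / (1.0 + c * math.sqrt(t))
    return min(gamma, gamma_max)

for t in [1, 10, 100, 1000]:
    g = discount_for_task(t)
    print(f"tasks={t:5d}  gamma={g:.3f}  horizon~{1 / (1 - g):.1f}")
```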
As language models have grown in parameters and layers, it has become much harder to train them and run inference with them on single GPUs. This severely restricts the availability of large language models such as GPT-3, BERT-Large, and many others. A common technique to solve this problem is pruning the network architecture by removing transformer heads, fully-connected weights, and other modules. The main challenge is to discern the important parameters from the less important ones. Our goal is to find strong metrics for identifying such parameters. We thus propose two strategies for calculating importance scores: Cam-Cut, based on GradCAM interpretations, and Smooth-Cut, based on SmoothGrad. Through this work, we show that our scoring functions are able to assign more relevant task-based scores to the network parameters, and thus both our pruning approaches significantly outperform the standard weight- and gradient-based strategies, especially at higher compression ratios in BERT-based models. We also analyze our pruning masks and find them to be significantly different from the ones obtained using standard metrics.
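A hedged sketch of a SmoothGrad-style importance score for pruning follows (our reading of the idea, not the authors' Smooth-Cut implementation): average per-weight gradient magnitudes over several noise-perturbed forward passes, then zero out the weights with the smallest scores.

```python
# Sketch (ours, not Smooth-Cut itself): smoothed per-weight sensitivity scores
# followed by magnitude-threshold pruning at a given compression ratio.
import torch

def smoothgrad_importance(model, loss_fn, batch, n_samples=8, sigma=0.01):
    scores = [torch.zeros_like(p) for p in model.parameters()]
    for _ in range(n_samples):
        noisy = batch["inputs"] + sigma * torch.randn_like(batch["inputs"])
        loss = loss_fn(model(noisy), batch["targets"])
        grads = torch.autograd.grad(loss, list(model.parameters()))
        for s, g in zip(scores, grads):
            s += g.abs() / n_samples  # average sensitivity over noisy passes
    return scores

def prune_by_score(model, scores, compression=0.5):
    flat = torch.cat([s.reshape(-1) for s in scores])
    threshold = flat.kthvalue(int(compression * flat.numel())).values
    with torch.no_grad():
        for p, s in zip(model.parameters(), scores):
            p.mul_((s > threshold).float())  # zero out low-importance weights
```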